Action Selection and the Basal Ganglia

  • We made a critter that has two modes:
    • move in the direction it was told to move in
    • or go back to where it started from
  • Which of those it did depended on a separate input ('scary')

In [ ]:
import nengo
model = nengo.Network()
with model:
    stim_command = nengo.Node([0,0])
    
    command = nengo.Ensemble(n_neurons=100, dimensions=2)
    nengo.Connection(stim_command, command)
    
    motor = nengo.Ensemble(n_neurons=100, dimensions=2)
    #nengo.Connection(command, motor)
    
    nengo.Probe(command)
    nengo.Probe(motor)
    
    position = nengo.Ensemble(n_neurons=500, dimensions=2, radius=10)
    nengo.Connection(motor, position)
    nengo.Connection(position, position, synapse=0.1)
    
    nengo.Probe(position)
    
    stim_scary = nengo.Node([0])
    scary = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(stim_scary, scary)
    
    dir_scared = nengo.Ensemble(100, dimensions=2)
    def dir_scared_func(x):
        return -x
    nengo.Connection(position, dir_scared, function=dir_scared_func)
    
    w_command = nengo.Ensemble(300, dimensions=3)
    nengo.Connection(scary, w_command[0])
    nengo.Connection(command, w_command[1:3])
    def command_func(x):
        w = 1 - x[0]
        return w*x[1], w*x[2]
    nengo.Connection(w_command, motor, function=command_func)
    
    nengo.Probe(scary)
    nengo.Probe(w_command)
    
    w_scary = nengo.Ensemble(300, dimensions=3)
    nengo.Connection(scary, w_scary[0])
    nengo.Connection(dir_scared, w_scary[1:3])
    def scary_func(x):
        w = x[0]
        return w*x[1], w*x[2]
    nengo.Connection(w_scary, motor, function=scary_func)
    
    nengo.Probe(w_scary)
  • This sort of system comes up a lot in cognitive models
    • A bunch of different possible actions
    • Pick one of them to do
  • How can we do this?

  • In the above example, we did it like this:

    • $m = (1-scary)(v_x, v_y) + (scary)(h_x, h_y)$
  • What about more complex situations?
    • What if there are three possible actions?
    • What if the actions involve different outputs?
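The two-action blend above is easy to check outside of neurons. A minimal sketch (the function and argument names are made up for illustration):

```python
import numpy as np

def motor_output(scary, command_dir, home_dir):
    # m = (1 - scary) * (v_x, v_y) + scary * (h_x, h_y)
    return (1 - scary) * np.asarray(command_dir) + scary * np.asarray(home_dir)

print(motor_output(0.0, [1.0, 0.0], [-0.5, -0.5]))  # not scared: follow the command
print(motor_output(1.0, [1.0, 0.0], [-0.5, -0.5]))  # scared: head back home
```

Intermediate 'scary' values give a mixture of the two directions, which is exactly the behaviour the neural version shows.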

Action Selection and Execution

  • This is known as action selection
  • Often divided into two parts:
    • Action selection (identifying the action to perform)
    • Action execution (actually performing the action)
  • Actions can be many different things
    • physical movements
    • moving attention
    • changing contents of working memory ("1, 2, 3, 4, ...")
    • recalling items from long-term memory

Action Selection

  • How can we do this?
  • We have a bunch of different possible actions
    • go to where we started from
    • go in the direction we are told to go in
    • move randomly
    • go towards food
    • go away from predators
  • Which one do we pick?
    • Ideas?

Reinforcement Learning

  • Let's steal an idea from reinforcement learning
  • Lots of different actions, learn to pick one
  • Each action has a utility $Q$ that depends on the current state $s$
    • $Q(s, a)$
  • Pick the action that has the largest $Q$
  • Note
    • Lots of different variations on this
    • $V(s)$
    • softmax: $p(a_i) = e^{Q(s, a_i)/T} / \sum_j e^{Q(s, a_j)/T}$
  • In Reinforcement Learning research, people come up with learning algorithms for adjusting $Q$ based on rewards
  • Let's not worry about that for now and just use the basic idea
    • There's some sort of state $s$
    • For each action $a_i$, compute $Q(s, a_i)$ which is a function that we can define
    • Take the biggest
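The selection rule itself is simple enough to state as plain (non-neural) code; the $Q$ values here are hypothetical:

```python
import numpy as np

def greedy(Q):
    # pick the action with the largest utility
    return int(np.argmax(Q))

def softmax(Q, T=0.1):
    # p(a_i) = exp(Q_i / T) / sum_j exp(Q_j / T); subtract the max for numerical stability
    e = np.exp((np.asarray(Q) - np.max(Q)) / T)
    return e / e.sum()

Q = [0.2, 0.9, 0.1]
print(greedy(Q))    # action 1 has the largest utility
print(softmax(Q))   # soft preference for action 1; sharpens as T shrinks
```

The hard part is not this arithmetic; it's computing it with neurons, quickly, for many actions.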

Implementation

  • One group of neurons to represent the state $s$
    • Or many different groups of neurons representing different parts of $s$
  • One group of neurons for each action's utility $Q(s, a_i)$
    • Or one large group of neurons for all the $Q$ values
  • Connect the $s$ neurons to the $Q$ neurons with functions that compute $Q(s, a_i)$

  • What should the output be?

    • We could have $index$, which is the index $i$ of the action with the largest $Q$ value
    • Or we could have something like $[0,0,1,0]$, indicating which action is selected
  • The second option seems easier if we consider that we have to do action execution next...

A Simple Example

  • State $s$ is a semantic pointer
    • 16-dimensional
  • Four actions (A, B, C, D)
  • Do action A if $s$ is near DOG, B if near CAT, C if near RAT, D if near COW
    • $Q(s, a_A)=s \cdot$ DOG
    • $Q(s, a_B)=s \cdot$ CAT
    • $Q(s, a_C)=s \cdot$ RAT
    • $Q(s, a_D)=s \cdot$ COW
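These dot-product utilities can be checked with a quick numpy sketch, using random unit vectors to stand in for vocabulary items (this is just the arithmetic, not a Nengo model):

```python
import numpy as np

rng = np.random.default_rng(1)

def pointer(d=16):
    # random unit vector standing in for a 16-D semantic pointer
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

DOG, CAT, RAT, COW = pointer(), pointer(), pointer(), pointer()
s = DOG + 0.1 * pointer()        # a state near DOG
Q = np.array([s @ DOG, s @ CAT, s @ RAT, s @ COW])
print(Q)            # the DOG utility is near 1; the others hover near 0
print(Q.argmax())
```

Random high-dimensional unit vectors are nearly orthogonal, so a state close to one vocabulary item gives one large utility and several small ones.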

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16, label='state')
    Q_A = nengo.Ensemble(50, 1, label='Q_A')
    Q_B = nengo.Ensemble(50, 1, label='Q_B')
    Q_C = nengo.Ensemble(50, 1, label='Q_C')
    Q_D = nengo.Ensemble(50, 1, label='Q_D')

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q_A, transform=[vocab.parse('DOG').v])
    nengo.Connection(state, Q_B, transform=[vocab.parse('CAT').v])
    nengo.Connection(state, Q_C, transform=[vocab.parse('RAT').v])
    nengo.Connection(state, Q_D, transform=[vocab.parse('COW').v])
    
    nengo.Probe(state)
    nengo.Probe(Q_A)
    nengo.Probe(Q_B)
    nengo.Probe(Q_C)
    nengo.Probe(Q_D)
  • It's annoying to have all those separate $Q$ Ensembles
  • Nengo has an EnsembleArray capability to help with this
    • Doesn't change the model at all
    • It just groups things together for you

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16)    
    Q = nengo.networks.EnsembleArray(50, 4)

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q.input, transform=[vocab.parse('DOG').v,
                                                vocab.parse('CAT').v,
                                                vocab.parse('RAT').v,
                                                vocab.parse('COW').v])

    nengo.Probe(state)
    nengo.Probe(Q.output)
  • How do we implement the $max$ function?
  • Well, it's just a function, so let's implement it
    • Need to combine all the $Q$ values into one 4-dimensional ensemble
    • Why?

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16)
    
    Q = nengo.networks.EnsembleArray(50, 4)

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q.input, transform=[vocab.parse('DOG').v,
                                                vocab.parse('CAT').v,
                                                vocab.parse('RAT').v,
                                                vocab.parse('COW').v])
    nengo.Probe(state)
    nengo.Probe(Q.output)
    
    Q_together = nengo.Ensemble(200, 4)
    nengo.Probe(Q_together)
    nengo.Connection(Q.output, Q_together)
    
    def maximum(x):
        result = [0, 0, 0, 0]
        result[x.argmax()] = 1
        return result
    
    R = nengo.networks.EnsembleArray(50, 4)
    nengo.Connection(Q_together, R.input, function=maximum)
    nengo.Probe(R.output)
  • That doesn't seem to work very well
  • Very nonlinear function, so hard for neurons to approximate it well
    • Need to have all the $Q$ values represented in one ensemble
  • Other options?

The Standard Neural Network Approach (modified)

  • If you give this problem to a standard neural networks person, what would they do?
  • They'll say this is exactly what neural networks are great at
    • Implement this with mutual inhibition and self-excitation
  • Neural competition
    • 4 "neurons"
    • have excitation from each neuron back to itself
    • have inhibition from each neuron to all the others
  • Now just put in the input, wait a while, and it'll stabilize to one option
  • Can we do that?
  • Sure! Just replace each "neuron" with a group of neurons, and compute the desired function on those connections
    • note that this is a very general method of converting any non-realistic neural model into a biologically realistic spiking neuron model

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16)
    
    Q = nengo.networks.EnsembleArray(50, 4)

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q.input, transform=[vocab.parse('DOG').v,
                                                vocab.parse('CAT').v,
                                                vocab.parse('RAT').v,
                                                vocab.parse('COW').v])
    nengo.Probe(state)
    nengo.Probe(Q.output)

    e = 0.1
    i = -1

    transform = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
    nengo.Connection(Q.output, Q.input, transform=transform)
  • Oops, that's not quite right
  • Why is it selecting more than one action?

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16)
    
    Q = nengo.networks.EnsembleArray(50, 4)

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q.input, transform=[vocab.parse('DOG').v,
                                                vocab.parse('CAT').v,
                                                vocab.parse('RAT').v,
                                                vocab.parse('COW').v])
    nengo.Probe(state)
    nengo.Probe(Q.output)

    e = 0.5
    i = -1

    def positive(x):
        if x[0]<0: return [0]
        else: return x
    Q.add_output('positive', positive)
    transform = [[e, i, i, i], [i, e, i, i], [i, i, e, i], [i, i, i, e]]
    nengo.Connection(Q.positive, Q.input, transform=transform)
  • Now we only influence other Actions when we have a positive value
    • Note: Is there a more neurally efficient way to do this?
  • Selects one action reliably
  • But we seem to have a bit of a lock-in effect
    • Adjust e and i?
    • Have an external reset signal?
    • Tend to have to trade off speed and this lock-in effect
  • Note that this speed is dependent on $e$, $i$, and the time constant of the neurotransmitter used
  • Can be hard to find good values
  • And this gets harder to balance as the number of actions increases

    • Also hard to balance for a wide range of $Q$ values
      • (Does it work for $Q$=[0.9, 0.9, 0.95, 0.9] and $Q$=[0.2, 0.2, 0.25, 0.2]?)
  • But this is still a pretty standard approach

    • Nice and easy to get working for special cases where there's a small number of actions and a relatively constant average $Q$ value.
  • Example: OReilly, R.C. (2006). Biologically Based Computational Models of High-Level Cognition. Science, 314, 91-94.

  • Any other options?
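The $Q$-range question above can be probed with a simple rate-based sketch of the mutual-inhibition dynamics (no spiking neurons; all parameter values here are made up for illustration, not fit to anything):

```python
import numpy as np

def compete(Q, e=0.5, i=-1.0, tau=0.1, dt=0.001, steps=5000):
    """Rate-based mutual inhibition: self-excitation e, cross-inhibition i,
    with the feedback applied only to rectified (positive) values."""
    Q = np.asarray(Q, float)
    n = len(Q)
    W = i * np.ones((n, n)) + (e - i) * np.eye(n)  # e on diagonal, i elsewhere
    x = np.zeros(n)
    for _ in range(steps):
        x += (dt / tau) * (Q + W @ np.maximum(x, 0.0) - x)
    return x

for Q in ([0.2, 0.2, 0.25, 0.2], [0.9, 0.9, 0.95, 0.9]):
    x = compete(Q)
    print(Q, '->', np.maximum(x, 0.0).round(2))
```

These particular values happen to pick the right winner in both regimes of this toy, but the winner's output level differs wildly between them, and the balancing act gets harder as the number of actions grows; that fragility is exactly the problem being raised.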

Biology

  • Let's look at the biology
  • Where is this action selection in the brain?
  • General consensus: the basal ganglia

  • Pretty much all of cortex connects in to this area (via the striatum)
  • Output goes to the thalamus, the central routing system of the brain
  • Disorders of this area of the brain cause problems controlling actions:
    • Parkinson's disease
      • Neurons in the substantia nigra die off
      • Extremely difficult to trigger actions to start
      • Usually physical actions; as disease progresses and more of the SNc is gone, can get cognitive effects too
    • Huntington's disease
      • Neurons in the striatum die off
      • Actions are triggered inappropriately (disinhibition)
      • Small uncontrollable movements
      • Trouble sequencing cognitive actions too
  • Also heavily implicated in reinforcement learning
    • The dopamine levels seem to map onto reward prediction error
    • High levels when get an unexpected reward, low levels when didn't get a reward that was expected

  • Connectivity diagram:

  • Old terminology:
    • "direct" pathway: cortex -> striatum -> GPi -> thalamus
    • "indirect" pathway: cortex -> striatum -> GPe -> STN -> GPi -> thalamus
  • Then they found:

    • "hyperdirect" pathway: cortex -> STN -> GPi -> thalamus
    • and lots of other connections
  • Activity in the GPi (output)

    • generally always active
    • neurons stop firing when corresponding action is chosen
    • representing [1, 1, 0, 1] instead of [0, 0, 1, 0]
  • Common approach (e.g. Leabra)

    • Each action has two groups of neurons in the striatum representing $Q(s, a_i)$ and $1-Q(s, a_i)$ ("go" and "no go")
    • Mutual inhibition causes only one of the "go" and one of the "no go" groups to fire
    • GPi neurons get connections from the "go" neurons, multiplied by -1 (direct pathway)
    • GPi also gets connections from the "no go" neurons, multiplied by -1 (striatum->GPe), then -1 again (GPe->STN), then +1 (STN->GPi)
    • Result in GPi is close to [1, 1, 0, 1] form
  • Seems to match onto the biology okay
    • But why the weird double-inverting thing? Why not skip the GPe and STN entirely?
    • And why split into "go" and "no-go"? Just the direct pathway on its own would be fine
    • Maybe it's useful for some aspect of the learning...
    • What about all those other connections?
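The sign bookkeeping in that go/no-go account can be checked with a toy calculation. This assumes the striatal competition has already settled (the winner's "go" pool fires, every loser's "no go" pool fires); it is only arithmetic, not the spiking model:

```python
import numpy as np

def gpi_output(Q):
    # assume striatal competition already picked the winner
    go = np.zeros(len(Q))
    go[np.argmax(Q)] = 1.0
    nogo = 1.0 - go
    # direct pathway: striatum(go) -> GPi, inhibitory (x -1)
    direct = -go
    # indirect pathway: striatum(nogo) -> GPe (x -1) -> STN (x -1) -> GPi (x +1)
    indirect = (-1) * (-1) * nogo
    # GPi is tonically active; rectify at zero
    return np.maximum(direct + indirect, 0.0)

print(gpi_output([0.3, 0.8, 0.2, 0.1]))  # -> [1. 0. 1. 1.]
```

The output suppresses only the winning action, matching the "[1, 1, 0, 1] instead of [0, 0, 1, 0]" coding described above.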

An alternate model of the Basal Ganglia

  • Maybe the weird structure of the basal ganglia is an attempt to do action selection without doing mutual inhibition
  • Needs to select from a large number of actions
  • Needs to do so quickly, and without strong lock-in effects

  • Gurney, Prescott, and Redgrave, 2001

  • Let's start with a very simple version

  • Sort of like an "unrolled" version of one step of mutual inhibition

  • Now let's map that onto the basal ganglia

  • But that's only going to work for very specific $Q$ values.
  • Need to dynamically adjust the amount of +ve and -ve weighting

  • This turns out to work surprisingly well
  • But extremely hard to analyze its behaviour
  • They showed that it qualitatively matches pretty well

  • So what happens if we convert this into realistic spiking neurons?

  • Use the same approach where one "neuron" in their model is a pool of neurons in the NEF
  • The "neuron model" they used was rectified linear
    • That becomes the function the decoders compute
  • Neurotransmitter time constants are all known
  • $Q$ values are between 0 and 1
  • Firing rates max out around 50-100Hz
  • Encoders are all positive and thresholds are chosen for efficiency

In [ ]:
import nengo

model = nengo.Network('Basal Ganglia')
with model:
    mm = 1
    mp = 1
    me = 1
    mg = 1
    ws = 1
    wt = 1
    wm = 1
    wg = 1
    wp = 0.9
    we = 0.3
    e = 0.2
    ep = -0.25
    ee = -0.2
    eg = -0.2
    le = 0.2
    lg = 0.2
    tau_ampa = 0.002
    tau_gaba = 0.008
    N = 50
    D = 4
    model.config[nengo.Ensemble].radius = 1.5
    model.config[nengo.Ensemble].encoders = nengo.dists.Choice([[1]])
    
    strD1 = nengo.networks.EnsembleArray(N, D, label="StrD1", 
                intercepts=nengo.dists.Uniform(e, 1))
    strD2 = nengo.networks.EnsembleArray(N, D, label="StrD2", 
                intercepts=nengo.dists.Uniform(e, 1))
    stn = nengo.networks.EnsembleArray(N, D, label="STN", 
                intercepts=nengo.dists.Uniform(ep, 1))
    gpi = nengo.networks.EnsembleArray(N, D, label="GPi", 
                intercepts=nengo.dists.Uniform(eg, 1))
    gpe = nengo.networks.EnsembleArray(N, D, label="GPe", 
                intercepts=nengo.dists.Uniform(ee, 1))

    input = nengo.Node([0]*D, label="input")
    output = nengo.Node(label="output", size_in=D)

    # spread the input to StrD1, StrD2, and STN
    nengo.Connection(input, strD1.input, synapse=None,
                     transform=ws * (1 + lg))
    nengo.Connection(input, strD2.input, synapse=None,
                     transform=ws * (1 - le))
    nengo.Connection(input, stn.input, synapse=None,
                     transform=wt)    
                     
    # connect the striatum to the GPi and GPe (inhibitory)
    def func_str(x):
        if x < e:
            return 0
        return mm * (x - e)
    strD1.add_output('func', func_str)
    import numpy
    nengo.Connection(strD1.func,
                     gpi.input, synapse=tau_gaba,
                     transform=-numpy.eye(D) * wm)
    strD2.add_output('func', func_str)
    nengo.Connection(strD2.func,
                     gpe.input, synapse=tau_gaba,
                     transform=-numpy.eye(D) * wm)                     
    def func_stn(x):
        if x < ep:
            return 0
        return mp * (x - ep)
                     
    # connect the STN to GPi and GPe (broad and excitatory)
    tr = wp * numpy.ones((D, D))
    stn.add_output('func', func_stn)
    nengo.Connection(stn.func, gpi.input,
                     transform=tr, synapse=tau_ampa)
    nengo.Connection(stn.func, gpe.input,
                     transform=tr, synapse=tau_ampa)

    def func_gpe(x):
        if x < ee:
            return 0
        return me * (x - ee)

    # connect the GPe to GPi and STN (inhibitory)
    gpe.add_output('func', func_gpe)
    nengo.Connection(gpe.func, gpi.input, synapse=tau_gaba,
                     transform=-we)
    nengo.Connection(gpe.func, stn.input, synapse=tau_gaba,
                     transform=-wg)

    def func_gpi(x):
        if x < eg:
            return 0
        return mg * (x - eg)
    # connect GPi to output (inhibitory)
    gpi.add_output('func', func_gpi)
    nengo.Connection(gpi.func, output)
    
    nengo.Probe(output)
  • Works pretty well
  • Scales up to many actions
  • Selects quickly
  • Gets behavioural match to empirical data, including timing predictions (!)
    • Also shows interesting oscillations not seen in the original GPR model
    • But these are seen in the real basal ganglia


In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16)
    
    Q = nengo.networks.EnsembleArray(50, 4, label='Q')

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q.input, transform=[vocab.parse('DOG').v,
                                                vocab.parse('CAT').v,
                                                vocab.parse('RAT').v,
                                                vocab.parse('COW').v])

    nengo.Probe(state)
    nengo.Probe(Q.output)
    
    bg = nengo.networks.BasalGanglia(4)
    nengo.Connection(Q.output, bg.input)
    
    R = nengo.networks.EnsembleArray(50, 4, label='R',
        encoders=nengo.dists.Choice([[1]]), 
        intercepts=nengo.dists.Uniform(0.2, 1))
    nengo.Connection(bg.output, R.input)
    nengo.Probe(R.output) 
    
    import numpy
    bias = nengo.Node([1], label='bias')
    nengo.Connection(bias, R.input, transform=numpy.ones((4, 1)), synapse=None)

    nengo.Connection(R.output, R.input, transform=(numpy.eye(4)-1), synapse=0.008)
  • This system seems to work well
  • Still not perfect
  • Matches biology nicely

Action Execution

  • Now that we can select an action, how do we perform it?
  • Depends on what the action is
  • Let's start with simple actions

    • Move in a given direction
    • Remember a specific vector
    • Send a particular value as input into a particular cognitive system
  • Example:

    • State $s$ is 16-dimensional
    • Four actions (A, B, C, D)
    • Do action A if $s$ is near DOG, B if near CAT, C if near RAT, D if near COW
      • $Q(s, a_A)=s \cdot$ DOG
      • $Q(s, a_B)=s \cdot$ CAT
      • $Q(s, a_C)=s \cdot$ RAT
      • $Q(s, a_D)=s \cdot$ COW
    • To do Action A, set $m=$ BARK
    • To do Action B, set $m=$ MEOW
    • To do Action C, set $m=$ SQUEAK
    • To do Action D, set $m=$ MOO

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA()
with model:
    state = nengo.Ensemble(500, 16)
    
    Q = nengo.networks.EnsembleArray(50, 4)

    vocab = spa.Vocabulary(16)   
    model.config[state].vocab = vocab
    nengo.Connection(state, Q.input, transform=[vocab.parse('DOG').v,
                                                vocab.parse('CAT').v,
                                                vocab.parse('RAT').v,
                                                vocab.parse('COW').v])

    nengo.Probe(state)
    nengo.Probe(Q.output)
    
    bg = nengo.networks.BasalGanglia(4)
    nengo.Connection(Q.output, bg.input)
    
    R = nengo.networks.EnsembleArray(50, 4, label='R',
        encoders=nengo.dists.Choice([[1]]), intercepts=nengo.dists.Uniform(0.2, 1))
    nengo.Connection(bg.output, R.input)
    nengo.Probe(R.output) 
    
    import numpy
    bias = nengo.Node([1], label='bias')
    nengo.Connection(bias, R.input, transform=numpy.ones((4, 1)), synapse=None)

    # mutual inhibition on the actions
    nengo.Connection(R.output, R.input, transform=(numpy.eye(4)-1), synapse=0.008)
    
    motor = nengo.Ensemble(500, 16, label='motor')
    
    nengo.Connection(R.output, motor, 
        transform=numpy.array([vocab.parse('BARK').v,
                    vocab.parse('MEOW').v,
                    vocab.parse('SQUEAK').v,
                    vocab.parse('MOO').v]).T)
    nengo.Probe(motor)
  • What about more complex actions?
  • Consider the creature we were making in class
    • Action 1: set $m$ to the direction we're told to go in
    • Action 2: set $m$ to the direction we started from
  • Need to pass information from one group of neurons to another
    • But only do this when the action is chosen
    • How?
  • Well, let's use a function
    • $m = a \times d$
    • where $a$ is the action-selection signal (0 if not selected, 1 if selected) and $d$ is the value being routed
  • There's also another way to do this
  • A special case for forcing a function to go to zero when a particular group of neurons is active

  • Build a communication channel with an intermediate group of neurons

    • When the action is not selected, inhibit that intermediate group of neurons
  • This is a situation where it makes sense to ignore the NEF!
    • All we want to do is shut down the neural activity
    • So just do a very inhibitory connection
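That gating idiom can be sketched without spiking neurons, using rectified-linear rate neurons and least-squares decoders; all the sizes and gain ranges below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 2
encoders = rng.standard_normal((n, d))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
gains = rng.uniform(1.0, 3.0, n)
biases = rng.uniform(-1.0, 1.0, n)

def rates(x, inhibition=0.0):
    # rectified-linear rates; 'inhibition' is a current added to every neuron
    return np.maximum(0.0, gains * (encoders @ x) + biases + inhibition)

# decoders for the identity function, fit by least squares on sample points
X = rng.uniform(-1, 1, (500, d))
A = np.array([rates(x) for x in X])
decoders = np.linalg.lstsq(A, X, rcond=None)[0]

x = np.array([0.5, -0.3])
gate_open = rates(x) @ decoders                      # value passes through
gate_closed = rates(x, inhibition=-10.0) @ decoders  # every neuron silenced
print(gate_open.round(2), gate_closed)
```

With enough inhibitory current, every neuron in the intermediate group is below threshold, so the decoded value is exactly zero. In Nengo the same trick is a connection with a large negative transform onto `channel.neurons`; since the goal is just to silence activity, no careful decoding is needed on that connection.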

The Cortex-Basal Ganglia-Thalamus loop

  • We now have everything we need for a model of one of the primary structures in the mammalian brain

    • Basal ganglia: action selection
    • Thalamus: action execution
    • Cortex: everything else
  • We build systems in cortex that give some input-output functionality

    • We set up the basal ganglia and thalamus to make use of that functionality appropriately
  • This sort of model is going to get complicated, so we've added a wrapper to help build everything:


In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA(label="SPA1")
with model:
    model.state = spa.Buffer(16)
    model.motor = spa.Buffer(16)
    actions = spa.Actions(
        'dot(state, DOG) --> motor=BARK',
        'dot(state, CAT) --> motor=MEOW',
        'dot(state, RAT) --> motor=SQUEAK',
        'dot(state, COW) --> motor=MOO',
    )    
    model.bg = spa.BasalGanglia(actions)
    model.thalamus = spa.Thalamus(model.bg)
    
    nengo.Probe(model.state.state.output)
    nengo.Probe(model.motor.state.output)
    nengo.Probe(model.thalamus.actions.output)
  • Another Example
    • Cortex stores some state (integrator)
    • Add some state transition rules
      • If in state A, go to state B
      • If in state B, go to state C
      • If in state C, go to state D
      • ...
    • $Q(s, a_i) = s \cdot a_i$
    • The effect of each action is to input the corresponding vector into the integrator

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA(label="SPA1")
with model:
    model.state = spa.Memory(16)
    actions = spa.Actions(
        'dot(state, A) --> state=B',
        'dot(state, B) --> state=C',
        'dot(state, C) --> state=D',
        'dot(state, D) --> state=E',
        'dot(state, E) --> state=A',
    )    
    model.bg = spa.BasalGanglia(actions)
    model.thalamus = spa.Thalamus(model.bg)
    
    model.input = spa.Input(state=lambda t: 'A' if t < 0.1 else '0')
    nengo.Probe(model.state.state.output)
    nengo.Probe(model.thalamus.actions.output)
  • But that's all using the simple actions
  • What about an action that involves taking information from one neural system and sending it to another?
  • Let's have a separate visual state and use it to put information into the changing state

In [ ]:
import nengo
import nengo.spa as spa

model = spa.SPA(label="SPA1")
with model:
    model.state = spa.Memory(16)
    model.vision = spa.Buffer(16)
    actions = spa.Actions(
        'dot(vision, LETTER) --> state=vision',
        'dot(state, A) --> state=B',
        'dot(state, B) --> state=C',
        'dot(state, C) --> state=D',
        'dot(state, D) --> state=E',
        'dot(state, E) --> state=A',
    )    
    model.bg = spa.BasalGanglia(actions)
    model.thalamus = spa.Thalamus(model.bg)
    
    model.input = spa.Input(vision=lambda t: 'LETTER+A' if t < 0.1 else '0')
    
    nengo.Probe(model.state.state.output)
    nengo.Probe(model.thalamus.actions.output)

Behavioural Evidence

  • So this lets us build more complex models
  • Is there any evidence that this is the way it works in brains?
    • Consistent with anatomy/connectivity
  • What about behavioural?
  • Sort of
  • Timing data
    • How long does it take to do an action?
    • There are lots of existing computational (non-neural) cognitive models that have something like this action selection loop
    • Usually all-symbolic
      • A set of IF-THEN rules
    • e.g. ACT-R
      • Used to model mental arithmetic, driving a car, using a GUI, air-traffic control, staffing a battleship, etc etc
    • Best fit across all these situations is to set the loop time to 50ms
  • How long does this model take?

    • Notice that all the timing is based on neural properties, not the algorithm
    • Dominated by the longer neurotransmitter time constants in the basal ganglia
  • This is in the right ballpark

  • But what about this distinction between the two types of actions?
  • This is a nice example of the usefulness of making neural models!
    • This distinction wasn't obvious from computational implementations

More complex tasks

  • Lots of complex tasks can be modelled this way
    • Some basic cognitive components (cortex)
    • action selection system (basal ganglia and thalamus)
  • The tricky part is figuring out the actions
  • Example: the Tower of Hanoi task
    • 3 pegs
    • N disks of different sizes on the pegs
    • move from one configuration to another
    • can only move one disk at a time
    • no larger disk can be on a smaller disk

  • can we build rules to do this?
  • How do people do this task?
    • Studied extensively by cognitive scientists
    • Simon (1975):
      1. Find the largest disk not in its goal position and make the goal to get it in that position. This is the initial “goal move” for purposes of the next two steps. If all disks are in their goal positions, the problem is solved.
      2. If there are any disks blocking the goal move, find the largest blocking disk (either on top of the disk to be moved or at the destination peg) and make the new goal move to move this blocking disk to the other peg (i.e., the peg that is neither the source nor destination of this disk). The previous goal move is stored as the parent goal of the new goal move. Repeat this step with the new goal move.
      3. If there are no disks blocking the goal move perform the goal move and (a) If the goal move had a parent goal retrieve that parent goal, make it the goal move, and go back to step 2. (b) If the goal had no parent goal, go back to step 1.
  • What do the actions look like?
  • State:

    • goal: what disk am I trying to move (D0, D1, D2)
    • focus: what disk am I looking at (D0, D1, D2)
    • goal_peg: where is the disk I am trying to move (A, B, C)
    • focus_peg: where is the disk I am looking at (A, B, C)
    • target_peg: where am I trying to move a disk to (A, B, C)
    • goal_final: what is the overall final desired location of the disk I'm trying to move (A, B, C)
  • Note: we're not yet modelling all the sensory and memory stuff here, so we manually set things like goal_final.

  • Action effects: when an action is selected, it could do the following

    • set focus
    • set goal
    • set goal_peg
    • actually try to move a disk to a given location by setting move and move_peg
      • Note: we're also not modelling the full motor system, so we fake this too
  • Is this sufficient to implement the algorithm described above?

  • What do the action rules look like?
    • if focus=NONE then focus=D2, goal=D2, goal_peg=goal_final
      • $Q$=focus $\cdot$ NONE
    • if focus=D2 and goal=D2 and goal_peg!=target_peg then focus=D1
      • $Q$=focus $\cdot$ D2 + goal $\cdot$ D2 - goal_peg $\cdot$ target_peg
    • if focus=D2 and goal=D2 and goal_peg==target_peg then focus=D1, goal=D1, goal_peg=goal_final
    • if focus=D1 and goal=D1 and goal_peg!=target_peg then focus=D0
    • if focus=D1 and goal=D1 and goal_peg==target_peg then focus=D0, goal=D0, goal_peg=goal_final
    • if focus=D0 and goal_peg==target_peg then focus=NONE
    • if focus=D0 and goal=D0 and goal_peg!=target_peg then focus=NONE, move=D0, move_peg=target_peg
    • if focus!=goal and focus_peg==goal_peg and target_peg!=focus_peg then goal=focus, goal_peg=A+B+C-target_peg-focus_peg
      • trying to move something, but smaller disk is on top of this one
    • if focus!=goal and focus_peg!=goal_peg and target_peg==focus_peg then goal=focus, goal_peg=A+B+C-target_peg-goal_peg
      • trying to move something, but smaller disk is on top of target peg
    • if focus=D0 and goal!=D0 and target_peg!=focus_peg and target_peg!=goal_peg and focus_peg!=goal_peg then move=goal, move_peg=target_peg
      • move the disk, since there's nothing in the way
    • if focus=D1 and goal!=D1 and target_peg!=focus_peg and target_peg!=goal_peg and focus_peg!=goal_peg then focus=D0
      • check the next disk
  • Sufficient to solve any version of the problem
  • Is it what people do?
  • How can we tell?
  • Do science

    • What predictions does the theory make
    • Errors?
    • Reaction times?
    • Neural activity?
    • fMRI?
  • Timing: